feat(job-distributor): add exp. backoff retry to feeds.SyncNodeInfo()
#15752
base: develop
Conversation
core/config/toml/types.go (Outdated)
	InsecureFastScrypt       *bool
	RootDir                  *string
	ShutdownGracePeriod      *commonconfig.Duration
	FeedsManagerSyncInterval *commonconfig.Duration
A new config option was added at the root level because I couldn't find a better place. Happy to move it elsewhere per the maintainers' advice.
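For context, a minimal sketch of how the periodic sync might consume the new option, assuming a fallback when it is unset (the helper name, the 12h default, and the `toml.Core` receiver are illustrative, not taken from the PR):

```go
// Hypothetical helper, not part of the PR: resolve the optional root-level
// duration with a default when it is not set in the TOML config.
func feedsManagerSyncInterval(c *toml.Core) time.Duration {
	if c.FeedsManagerSyncInterval == nil {
		return 12 * time.Hour // assumed default, not taken from the PR
	}
	return c.FeedsManagerSyncInterval.Duration()
}
```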
Unit tests were skipped in this draft PR, as I want to get some feedback on the approach before finishing the PR.
AER Report: CI Core ran successfully ✅
AER Report: Operator UI CI ran successfully ✅
Force-pushed from 301972c to 83b1842
Hmm, I wonder: would this solve the connection issue? If there is a communication issue between the node and JD, how would the auto sync help resolve it? It will try and it will fail, right? Alternatively, would it be better to have some kind of exponential backoff retry when it does fail during the sync instead? (Not that it will solve a permanent connection issue.)
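To make the suggestion concrete, here is a minimal hand-rolled sketch of a backoff wrapper around the existing method (names and delay values are illustrative; the PR ultimately uses `avast/retry-go` instead):

```go
// Sketch only: retry SyncNodeInfo with exponentially increasing delays,
// stopping on success or when the context is cancelled.
func (s *service) syncNodeInfoWithBackoff(ctx context.Context, managerID int64) error {
	delay := 10 * time.Second
	const maxDelay = 30 * time.Minute

	for {
		err := s.SyncNodeInfo(ctx, managerID)
		if err == nil {
			return nil
		}
		s.lggr.Warnw("failed to sync node info; will retry", "managerID", managerID, "wait", delay, "err", err)

		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(delay):
		}

		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```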
core/services/feeds/service.go (Outdated)
	for _, manager := range managers {
		s.lggr.Infow("synchronizing node info", "managerID", manager.ID)
		err := s.SyncNodeInfo(ctx, manager.ID)
Managers/JDs can be disabled/enabled, so we should avoid syncing to disabled JDs. We can use the `DisabledAt` field to determine this, as is done at line 1130 of this file.
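A minimal sketch of that filter, assuming `DisabledAt` is a nullable timestamp on the manager model (the exact field type is an assumption):

```go
// Sketch only: skip feeds managers that have been disabled.
for _, manager := range managers {
	if manager.DisabledAt != nil {
		s.lggr.Debugw("skipping disabled feeds manager", "managerID", manager.ID)
		continue
	}

	s.lggr.Infow("synchronizing node info", "managerID", manager.ID)
	if err := s.SyncNodeInfo(ctx, manager.ID); err != nil {
		s.lggr.Errorw("failed to sync node info", "managerID", manager.ID, "err", err)
	}
}
```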
core/services/feeds/service.go (Outdated)
@@ -1550,6 +1553,32 @@ func (s *service) isRevokable(propStatus JobProposalStatus, specStatus SpecStatus
 	return propStatus != JobProposalStatusDeleted && (specStatus == SpecStatusPending || specStatus == SpecStatusCancelled)
 }
 
+func (s *service) periodicallySyncNodeInfo(ctx context.Context) {
I think there is also an assumption in the code that JD is connected to the core node when `SyncNodeInfo` is called; otherwise it will return the error `could not fetch client`. That may not be a big deal, just noise. But if we could check whether the nodes are connected before syncing, that would be nice.
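A sketch of that check, assuming the service's connections manager exposes a way to test connectivity (`s.connMgr.IsConnected` is an assumption here, not confirmed by the PR):

```go
// Sketch: skip managers whose JD connection is not currently established,
// instead of letting SyncNodeInfo fail with "could not fetch client" noise.
for _, manager := range managers {
	if err := s.connMgr.IsConnected(manager.ID); err != nil {
		s.lggr.Debugw("skipping node info sync; JD not connected", "managerID", manager.ID, "err", err)
		continue
	}
	if err := s.SyncNodeInfo(ctx, manager.ID); err != nil {
		s.lggr.Errorw("failed to sync node info", "managerID", manager.ID, "err", err)
	}
}
```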
Force-pushed from 57b55cc to c5d0079
Force-pushed from c5d0079 to a1a4281
There's a behavior that we've observed for some time on the NOP side where they will add/update a chain configuration on the Job Distributor panel but the change is not reflected on the service itself. This leads to inefficiencies, as NOPs are unaware of this and thus need to be notified so that they may "reapply" the configuration.

After some investigation, we suspect that this is due to connectivity issues between the nodes and the Job Distributor instance, which cause the message with the update to be lost.

This PR attempts to solve this by adding a "retry" wrapper on top of the existing `SyncNodeInfo` method. We rely on `avast/retry-go` to implement the bulk of the retry logic. It's configured with a minimal delay of 10 seconds, a maximum delay of 30 minutes, and a total of 56 retries -- which adds up to a bit more than 24 hours.

Ticket Number: DPA-1371
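A sketch of what that wrapper could look like with `avast/retry-go` (the v4 import path, function name, and option wiring are assumptions; only the delay/attempt values come from the description above):

```go
import (
	"context"
	"time"

	"github.com/avast/retry-go/v4"
)

// Sketch only: wrap the existing SyncNodeInfo in a retry loop configured per
// the description: 10s initial delay, 30m cap, 56 attempts (a bit over 24h).
func (s *service) syncNodeInfoWithRetry(ctx context.Context, id int64) {
	err := retry.Do(
		func() error { return s.SyncNodeInfo(ctx, id) },
		retry.Context(ctx),
		retry.Delay(10*time.Second),
		retry.MaxDelay(30*time.Minute),
		retry.Attempts(56),
		retry.DelayType(retry.BackOffDelay),
		retry.OnRetry(func(attempt uint, err error) {
			s.lggr.Warnw("failed to sync node info; will retry", "attempt", attempt, "managerID", id, "err", err)
		}),
	)
	if err != nil {
		s.lggr.Errorw("giving up syncing node info", "managerID", id, "err", err)
	}
}
```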
Force-pushed from a1a4281 to 61297ab
Quality Gate passed
As discussed earlier today, I went ahead and implemented your suggestion. I ran a few manual tests and it seems to work as expected, though I had to add some extra logic around the […].

I still feel the background goroutine would be more resilient. But, on the other hand, this option does not require any runtime configuration -- I think we can safely hardcode the retry parameters -- which is a huge plus to me.
Thanks @gustavogama-cll, yeah, the background goroutine definitely has its pros. Both approaches are valid; it's just that, for me, the retry is simpler.
		retry.Delay(5 * time.Second),
		retry.Delay(10 * time.Second),
Delay is configured twice.
	var ctx context.Context
	ctx, s.syncNodeInfoCancel = context.WithCancel(context.Background())

	retryOpts := []retry.Option{
Is there a reason we didn't use `retry.BackOffDelay`?
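For reference, a sketch of what passing it explicitly might look like; note that recent versions of `avast/retry-go` already default to a combination of backoff and random jitter, so the explicit option mostly documents intent (worth double-checking against the pinned version):

```go
retryOpts := []retry.Option{
	retry.Context(ctx),
	retry.Delay(10 * time.Second),       // single initial delay (cf. the duplicated Delay noted above)
	retry.MaxDelay(30 * time.Minute),
	retry.Attempts(56),
	retry.DelayType(retry.BackOffDelay), // delay doubles each attempt until it hits MaxDelay
}
```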
		retry.Delay(5 * time.Second),
		retry.Delay(10 * time.Second),
		retry.MaxDelay(30 * time.Minute),
		retry.Attempts(48 + 8), // 30m * 48 =~ 24h; plus the initial 8 shorter retries
Where did you derive the 8? Where do we configure the shorter retries?
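For what it's worth, the numbers do roughly work out under a pure `BackOffDelay` schedule starting at 10s and capped at 30m; a small standalone check (assumes doubling with no jitter):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second
	const maxDelay = 30 * time.Minute

	var total time.Duration
	short := 0
	// 56 attempts means at most 55 waits between attempts.
	for wait := 1; wait <= 55; wait++ {
		if delay < maxDelay {
			short++ // the "shorter" retries before the 30m cap kicks in
		}
		total += delay
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// Prints: short waits: 8 total: 24h12m30s
	fmt.Println("short waits:", short, "total:", total)
}
```

So the 8 appears to be the number of doubling steps before the delay reaches the 30-minute cap; the shorter retries aren't configured separately, they fall out of the backoff itself.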
	}

	s.syncNodeInfoCancel()
	s.syncNodeInfoCancel = func() {}
Hmm, won't this introduce a race condition, since each request that wants to update the node info will try to set this variable?
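One way to address that, sketched here (not the PR's code): guard the cancel function with a mutex so concurrent callers swap it atomically.

```go
// Sketch only: serialize access to the shared cancel function.
type service struct {
	// ...existing fields...
	syncNodeInfoMu     sync.Mutex
	syncNodeInfoCancel context.CancelFunc
}

func (s *service) swapSyncNodeInfoCancel(next context.CancelFunc) {
	s.syncNodeInfoMu.Lock()
	defer s.syncNodeInfoMu.Unlock()

	if s.syncNodeInfoCancel != nil {
		s.syncNodeInfoCancel() // stop any previous retry goroutine
	}
	s.syncNodeInfoCancel = next
}
```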
@@ -141,6 +143,7 @@ type service struct {
 	lggr                logger.Logger
 	version             string
 	loopRegistrarConfig plugins.RegistrarConfig
+	syncNodeInfoCancel  context.CancelFunc
I think instead of using this to pass the context, we should just have `syncNodeInfoWithRetry` accept a context as a parameter; each caller should have a context value to pass in.
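A sketch of the suggested signature (illustrative; the retry options would stay as in the diff above):

```go
// Sketch: take the caller's context instead of storing a cancel func on the service.
func (s *service) syncNodeInfoWithRetry(ctx context.Context, id int64) {
	err := retry.Do(
		func() error { return s.SyncNodeInfo(ctx, id) },
		retry.Context(ctx), // the retry loop stops as soon as the caller's context is done
		// ...same Delay/MaxDelay/Attempts options as elsewhere in this PR...
	)
	if err != nil {
		s.lggr.Errorw("failed to sync node info", "managerID", id, "err", err)
	}
}
```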
func (s *service) syncNodeInfoWithRetry(id int64) {
	// cancel the previous context -- and, by extension, the existing goroutine --
	// so that we can start anew
	s.syncNodeInfoCancel()
I don't think we need to do this, right? If the callers of `syncNodeInfoWithRetry` pass in their own context, which is scoped to a request, then we don't have to manually cancel each context. Each request should have its own retry; e.g., request A should not cancel request B's sync, which is what happens with the current setup?
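A hypothetical call-site sketch of that idea (the `UpdateChainConfig` body is simplified and not taken from the PR): each request passes its own context, so one request's cancellation never touches another's retry; whether the retry should outlive the request (e.g. by detaching with `context.WithoutCancel`, Go 1.21+) is a separate design choice.

```go
func (s *service) UpdateChainConfig(ctx context.Context, cfg ChainConfig) (int64, error) {
	id, err := s.orm.UpdateChainConfig(ctx, cfg)
	if err != nil {
		return 0, err
	}

	// Detach from the request's cancellation so the retry can keep going after
	// the request completes; drop WithoutCancel to keep it request-scoped instead.
	go s.syncNodeInfoWithRetry(context.WithoutCancel(ctx), cfg.FeedsManagerID)

	return id, nil
}
```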